Computational study of the step size parameter of the subgradient optimization method

Author

  • Mengjie Han
Abstract

The subgradient optimization method is a simple and flexible iterative algorithm for linear programming. It is much simpler than Newton's method, can be applied to a wider variety of problems, and converges even when the objective function is nondifferentiable. Since an efficient algorithm should not only produce a good solution but also require little computing time, a simple algorithm that delivers high-quality solutions is always preferable. In this study, a series of step size parameters in the subgradient equation is examined. Performance is compared on a general piecewise function and on a specific p-median problem, and we examine how the quality of the solution changes under five forms of the step size parameter α.
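
A minimal sketch of the plain subgradient iteration x_{k+1} = x_k - t_k g_k is given below, applied to a small piecewise-linear convex test function. The step size schedules it compares (constant, 1/k, 1/sqrt(k), and a Polyak-type rule with a crude lower-bound target) are generic textbook choices used only for illustration; they are not the five forms of α studied in the paper, and the test function and all numerical values are invented for this example.

    import numpy as np

    def subgradient_method(f, subgrad, x0, step_rule, n_iter=500):
        """Plain subgradient iteration x_{k+1} = x_k - t_k * g_k, g_k in df(x_k).

        Individual steps need not decrease f, so the best iterate seen so far
        is tracked and returned.
        """
        x = np.asarray(x0, dtype=float)
        best_x, best_f = x.copy(), f(x)
        for k in range(1, n_iter + 1):
            g = subgrad(x)
            if np.linalg.norm(g) == 0:       # a zero subgradient means x is optimal
                break
            x = x - step_rule(k, f(x), g) * g
            fx = f(x)
            if fx < best_f:
                best_f, best_x = fx, x.copy()
        return best_x, best_f

    # A small piecewise-linear convex objective f(x) = max_i (a_i . x + b_i);
    # the coefficients are arbitrary and chosen only so that the minimum is attained.
    A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.5, -3.0]])
    b = np.array([0.0, 1.0, -0.5])
    f = lambda x: np.max(A @ x + b)
    subgrad = lambda x: A[np.argmax(A @ x + b)]   # an active row is a subgradient

    # Illustrative step size schedules (generic textbook rules, not the five
    # forms of alpha compared in the paper).
    LOWER_BOUND = 0.0   # a trivial lower bound on f*, used by the Polyak-type rule
    rules = {
        "constant":    lambda k, fx, g: 0.05,
        "1/k":         lambda k, fx, g: 1.0 / k,
        "1/sqrt(k)":   lambda k, fx, g: 0.5 / np.sqrt(k),
        "Polyak-type": lambda k, fx, g: 0.9 * (fx - LOWER_BOUND) / (g @ g),
    }

    for name, rule in rules.items():
        _, best = subgradient_method(f, subgrad, np.array([5.0, 5.0]), rule)
        print(f"{name:12s} best value found: {best:.4f}")

Each schedule reaches a similar best value on this toy problem, but the number of iterations needed differs noticeably, which is the kind of trade-off the paper quantifies for its five choices of α.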

Related articles

The Practical Performance of Subgradient Computational Techniques for Mesh Network Utility Optimization

In the networking research literature, the problem of network utility optimization is often converted to the dual problem which, due to nondifferentiability, is solved with a particular subgradient technique. This technique is not an ascent scheme, hence each iteration does not necessarily improve the value of the dual function. This paper examines the performance of this computational techniqu...

Investigation of stepped planing hull hydrodynamics using computational fluid dynamics and response surface method

The use of a step at the bottom of the hull is one of the effective factors in reducing resistance and increasing the stability of a planing hull. The presence of a step at the bottom of this type of hull creates a separation in the flow, which reduces the wetted surface of the hull, thereby reducing the drag on the body as well as the dynamic trim. In this study, a design space was cre...

Using Local Surrogate Information in Lagrangean Relaxation: an Application to Symmetric Traveling Salesman Problems

The Traveling Salesman Problem (TSP) is a classical Combinatorial Optimization problem that has been intensively studied. The Lagrangean relaxation was first applied to the TSP in 1970. The Lagrangean relaxation limit approximates what is known today as the HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound for new heurist...

Using logical surrogate information in Lagrangean relaxation: An application to symmetric traveling salesman problems

The Traveling Salesman Problem (TSP) is a classical Combinatorial Optimization problem, which has been intensively studied. The Lagrangean relaxation was first applied to the TSP in 1970. The Lagrangean relaxation limit approximates what is known today as HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound...

Incremental Stochastic Subgradient Algorithms for Convex Optimization

This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorit...

Publication date: 2013